

Search for: All records

Creators/Authors contains: "Manoj, Naren"


  1. Po-Ling Loh and Maxim Raginsky (Eds.)
    We give efficient deterministic one-pass streaming algorithms for finding an ellipsoidal approximation of a symmetric convex polytope. The algorithms are near-optimal in that their approximation factor differs from that of the optimal offline solution only by a factor sub-logarithmic in the aspect ratio of the polytope.
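    A minimal, illustrative sketch of the streaming setting in entry 1, not the paper's near-optimal algorithm: a naive one-pass baseline that accumulates the second-moment matrix M = Σ v vᵀ of the streamed vertices of a symmetric polytope K = conv{±v_i} and returns the ellipsoid E = {x : xᵀ M⁻¹ x ≤ 1}, which satisfies K ⊆ E ⊆ √n·K. The class name and interface below are hypothetical.

```python
# Illustrative baseline only (hypothetical names): one pass over the vertices
# ±v_1, ..., ±v_n of a symmetric polytope, maintaining M = sum_i v_i v_i^T.
# The ellipsoid E = {x : x^T M^{-1} x <= 1} contains K = conv{±v_i} and is
# contained in sqrt(n) * K -- only a sqrt(n)-factor approximation, far weaker
# than the guarantees in the paper.
import numpy as np

class NaiveStreamingEllipsoid:
    def __init__(self, dim):
        self.M = np.zeros((dim, dim))  # running second-moment matrix
        self.n = 0                     # number of vertices seen so far

    def update(self, v):
        """Process one streamed vertex v (1-D array of length dim)."""
        self.M += np.outer(v, v)
        self.n += 1

    def contains(self, x, scale=1.0):
        """Is x inside the current ellipsoid, scaled by `scale`?"""
        M_inv = np.linalg.pinv(self.M)  # pseudo-inverse in case M is singular
        return float(x @ M_inv @ x) <= scale ** 2

# Toy usage: every streamed vertex ends up inside the maintained ellipsoid.
rng = np.random.default_rng(0)
sketch = NaiveStreamingEllipsoid(dim=5)
vertices = rng.normal(size=(20, 5))
for v in vertices:
    sketch.update(v)
print(all(sketch.contains(v) for v in vertices))  # -> True
```

    The point of the sketch is only to show what "one-pass" means here; closing the gap between this crude √n factor and the offline optimum, up to a factor sub-logarithmic in the aspect ratio, is what the paper addresses.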
  2. A backdoor data poisoning attack is an adversarial attack wherein the attacker injects several watermarked, mislabeled training examples into a training set. The watermark does not impact the test-time performance of the model on typical data; however, the model reliably errs on watermarked examples. To gain a better foundational understanding of backdoor data poisoning attacks, we present a formal theoretical framework within which one can discuss backdoor data poisoning attacks for classification problems. We then use this to analyze important statistical and computational issues surrounding these attacks. On the statistical front, we identify a parameter we call the memorization capacity that captures the intrinsic vulnerability of a learning problem to a backdoor attack. This allows us to argue about the robustness of several natural learning problems to backdoor attacks. Our results favoring the attacker involve presenting explicit constructions of backdoor attacks, and our robustness results show that some natural problem settings cannot yield successful backdoor attacks. From a computational standpoint, we show that under certain assumptions, adversarial training can detect the presence of backdoors in a training set. We then show that under similar assumptions, two closely related problems we call backdoor filtering and robust generalization are nearly equivalent. This implies that it is both asymptotically necessary and sufficient to design algorithms that can identify watermarked examples in the training set in order to obtain a learning algorithm that both generalizes well to unseen data and is robust to backdoors. 
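    A toy, hedged illustration of the attack model described in entry 2, not the paper's formal framework or its constructions: poison a synthetic training set with watermarked, mislabeled examples and check that a linear model keeps high clean accuracy while typically misclassifying triggered inputs. All names and constants below are arbitrary choices for the demo.

```python
# Toy backdoor-poisoning demo on synthetic data (illustrative assumptions only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d, n = 20, 2000

def sample(n):
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, d))
    X[:, 0] += np.where(y == 1, 2.0, -2.0)   # true signal lives in coordinate 0
    return X, y

def add_trigger(X):
    Xw = X.copy()
    Xw[:, -1] = 6.0                          # the "watermark": spike the last coordinate
    return Xw

X_train, y_train = sample(n)
X_test, y_test = sample(n)

# Poison ~10% of the training set: watermark class-0 points and flip their labels to 1.
idx = np.flatnonzero(y_train == 0)[: n // 10]
X_poison, y_poison = add_trigger(X_train[idx]), np.ones(len(idx), dtype=int)
X_aug = np.vstack([X_train, X_poison])
y_aug = np.concatenate([y_train, y_poison])

model = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)

clean_acc = model.score(X_test, y_test)                # typically stays high
triggered = add_trigger(X_test[y_test == 0])           # watermark clean class-0 inputs
attack_rate = (model.predict(triggered) == 1).mean()   # fraction flipped to class 1
print(f"clean accuracy ~ {clean_acc:.2f}, trigger success rate ~ {attack_rate:.2f}")
```

    On this toy data the watermark is a single coordinate the clean distribution never uses, which is exactly the kind of "spare capacity" the paper's memorization-capacity parameter is meant to capture.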
  3. We show a hardness result for random smoothing to achieve certified adversarial robustness against attacks in the ℓp ball of radius ϵ when p>2. Although random smoothing has been well understood for the ℓ2 case using the Gaussian distribution, much remains unknown concerning the existence of a noise distribution that works for the case of p>2. This has been posed as an open problem by Cohen et al. (2019) and includes many significant paradigms such as the ℓ∞ threat model. In this work, we show that any noise distribution D over R^d that provides ℓp robustness for all base classifiers with p>2 must satisfy E[η_i^2] = Ω(d^(1−2/p) ϵ^2 (1−δ)/δ^2) for 99% of the features (pixels) of vector η∼D, where ϵ is the robust radius and δ is the score gap between the highest-scored class and the runner-up. Therefore, for high-dimensional images with pixel values bounded in [0,255], the required noise will eventually dominate the useful information in the images, leading to trivial smoothed classifiers.
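    A rough numeric reading of the lower bound in entry 3 for the ℓ∞ threat model, with the Ω(·) constant dropped and illustrative parameter values (ImageNet-sized input, ϵ = 8/255 on [0,1]-scaled pixels, score gap δ = 0.5) that are assumptions for the sake of the example, not values from the paper:

```python
# Evaluate d^(1-2/p) * eps^2 * (1-delta)/delta^2 with assumed parameters;
# the hidden constant in the Omega(.) is dropped.
import math

d = 224 * 224 * 3      # input dimension of a typical ImageNet image
p = float("inf")       # l_inf threat model, so d^(1-2/p) = d
eps = 8 / 255          # a commonly used l_inf radius, rescaled to [0, 1] pixels
delta = 0.5            # assumed score gap between the top class and the runner-up

exponent = 1 - 2 / p   # 2/inf == 0.0 in Python, so this is exactly 1.0 here
variance_lb = d ** exponent * eps ** 2 * (1 - delta) / delta ** 2
print(f"per-pixel E[eta_i^2] >~ {variance_lb:.1f}  (std ~ {math.sqrt(variance_lb):.1f})")
```

    Even under these mild assumptions the implied per-pixel noise standard deviation (about 17 on a [0,1] pixel scale) dwarfs the image itself, which is the abstract's point that the smoothed classifier becomes trivial.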